Appendix: Lifelong Domain Adaptation via Consolidated Internal Distribution

Neural Information Processing Systems

Figure 1 gives a high-level overview of the lifelong UDA approach. Lifelong learning is an iterative process in which the model is updated persistently. After training on a source domain with labeled data, the encoder maps the input data to a multi-modal distribution in the embedding space.
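As a rough illustration of this idea, the sketch below fits a class-conditional Gaussian mixture to source-domain embeddings, one mode per class, to estimate the multi-modal internal distribution. The names `encoder`, `source_loader`, and `estimate_internal_distribution` are hypothetical stand-ins, not identifiers from the paper, and the diagonal-covariance choice is an assumption made for simplicity.

```python
import numpy as np
import torch
from sklearn.mixture import GaussianMixture

def estimate_internal_distribution(encoder, source_loader, n_classes, device="cpu"):
    """Fit a class-conditional GMM over source-domain embeddings.

    `encoder` and `source_loader` are hypothetical stand-ins for the
    trained feature extractor and the labeled source DataLoader. Each
    class is modeled by one Gaussian mode, approximating the multi-modal
    internal distribution described above.
    """
    encoder.eval()
    feats, labels = [], []
    with torch.no_grad():
        for x, y in source_loader:
            feats.append(encoder(x.to(device)).cpu())
            labels.append(y)
    feats = torch.cat(feats).numpy()
    labels = torch.cat(labels).numpy()

    # Initialize one component per class from the labeled embeddings so
    # that component indices line up with class labels (assumes every
    # class appears in the source data).
    means = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    gmm = GaussianMixture(n_components=n_classes, covariance_type="diag",
                          means_init=means)
    gmm.fit(feats)
    return gmm
```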


Lifelong Domain Adaptation via Consolidated Internal Distribution

Neural Information Processing Systems

We develop an algorithm to address unsupervised domain adaptation (UDA) in continual learning (CL) settings. The goal is to update a model continually so that it learns distributional shifts across sequentially arriving tasks with unlabeled data while retaining knowledge about previously learned tasks. Existing UDA algorithms address the challenge of domain shift, but they require simultaneous access to the datasets of the source and the target domains. Existing CL works, on the other hand, handle only tasks with labeled data. Our solution is based on consolidating the learned internal distribution for improved model generalization on new domains and on benefiting from experience replay to overcome catastrophic forgetting.
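The sketch below shows, under stated assumptions, how these two ingredients could be combined in a single adaptation step: pseudo-labeled samples drawn from the consolidated internal distribution keep the classifier consistent, a small replay buffer mitigates catastrophic forgetting, and an alignment term pulls target embeddings toward the internal distribution. The function name `adapt_step`, the buffer variables, and the simple mean-matching alignment loss are assumptions for illustration; the paper's actual distribution-matching objective may differ.

```python
import torch
import torch.nn.functional as F

def adapt_step(encoder, classifier, gmm, target_x, memory_x, memory_y,
               optimizer, lambda_align=0.1):
    """One adaptation step on an unlabeled target batch (illustrative sketch).

    `gmm` is the consolidated internal distribution fitted on the source
    task; `memory_x`/`memory_y` form a small replay buffer of past labeled
    samples. The alignment loss is a mean-matching placeholder for a
    proper distribution-matching metric.
    """
    # Draw pseudo-labeled samples from the internal distribution; with one
    # component per class, component indices serve as class labels.
    z_pseudo, y_pseudo = gmm.sample(len(target_x))
    z_pseudo = torch.as_tensor(z_pseudo, dtype=torch.float32)
    y_pseudo = torch.as_tensor(y_pseudo, dtype=torch.long)

    z_target = encoder(target_x)
    z_memory = encoder(memory_x)

    # Keep the classifier consistent with the consolidated distribution.
    loss_cls = F.cross_entropy(classifier(z_pseudo), y_pseudo)
    # Replay past labeled samples to mitigate catastrophic forgetting.
    loss_replay = F.cross_entropy(classifier(z_memory), memory_y)
    # Align target embeddings with the internal distribution (mean matching
    # as a stand-in for the distribution-alignment objective).
    loss_align = (z_target.mean(0) - z_pseudo.mean(0)).pow(2).sum()

    loss = loss_cls + loss_replay + lambda_align * loss_align
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```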